Surviving Peer Review

Handling Tough Comments on Data and SOTA Comparisons

Properly handling tough comments in the peer review process is difficult; here are some brief thoughts on two comments that are particularly relevant to machine learning research: implementing state-of-the-art methods for comparison and validating your findings on a larger dataset. Understanding why these comments arise, and what your options are for navigating them, will help you succeed in your career as a researcher and computer scientist. Note: these are brief comments only; I may expand the article later when I find time.

Why these comments are challenging

When publishing academic work in computer science, acquiring more data and comparing your proposed algorithms against state-of-the-art (SOTA) models are among the most challenging reviewer requests, for several reasons:

  1. Acquiring more data
    • collecting data can be time-consuming. In some cases (e.g., applied work in medical imaging or finance), ethical approvals or permissions can be required.
    • the additional data might introduce variability, making results less favorable or requiring further adjustments.
    • processing and analyzing new data adds a significant workload.
  2. Comparing against SOTA models
    • reproducibility issues can arise: the authors of some SOTA models do not release their code, or the models require extensive hyperparameter tuning, making a fair comparison difficult.
    • computational costs can be prohibitive, as deep learning models often require powerful GPUs and long training times.
    • lastly, the outcome can be negative: there is a risk that the SOTA method outperforms your approach, weakening your contribution.

Why reviewers make these comments

Does a request to add more data or to include a SOTA comparison mean that your research is weak in crucial aspects? Not necessarily:

  • High standards for publication: many journals, especially top-tier ones, expect rigorous experimental validation, even for strong papers.
  • Routine reviewer expectations: some reviewers always request more data or comparisons as a default without fully considering feasibility.
  • A sign of interest, not rejection: if reviewers thought that your work was fundamentally flawed, they might have rejected it outright rather than suggesting improvements. These comments indicate that they see value but want stronger validation.
  • Field-specific bias: in some fields (e.g., machine learning, medical imaging), it is standard to compare against deep learning methods, even if the study does not focus on them.

General tips

Here is some advice based on personal experience:

  • Concerning open-source datasets, one source I often use is Google Dataset Search.
  • You can find common SOTA models and datasets in review articles. For instance, I have recently been passionate about video forecasting models; the paper by Oprea et al. in TPAMI lists the available methods and the datasets used in that field. Being familiar with the academic literature in your field is key to understanding how your study compares and in which settings you have a chance of beating SOTA models.
  • One can have a look at Papers with Code for SOTA models.
  • Don't hesitate to use AI tools (ChatGPT, Perplexity, Claude...)! Although they are not perfect yet, they can help steer you in a positive direction.

Time management

The most difficult aspect of these comments is usually the time commitment that they require. It is easy to formulate criticisms, but it can take you weeks or months to address them. Peer review in academia can be unpredictable: sometimes you may get comments that are relatively easy to answer, but sometimes you will need to make a more substantial investment of your time.

  • Make a timeline to stay organized. If you don't have enough time, you can request a deadline extension.
  • Ask yourself what you will sacrifice and reorganize your schedule (if needed) to deliver on your commitment to research and your career. A publication in a high-impact-factor journal can open unexpected doors for you. Think about what motivates you in your research: what would be the benefits for the community if your work were published?
  • Exploratory research before the initial submission may be the most enjoyable part of the process; handling reviewer comments, however, can feel like a race. At the resubmission stage, your day-to-day life may become more about (forced) execution than exploration. Consequently, you need to put even more emphasis on adopting the right habits (wake up early, do sport, eat well...) and executing on your plans.

One possible solution: partial revisions

If answering such requests feels impossible, one solution could be to request a partial revision. Unless the justification is strong, a partial revision is unlikely to be accepted in high-impact journals (typically with an impact factor greater than 5.0). Top-tier journals often enforce high experimental standards. However, the editor will make a decision at their own discretion. Some editors are flexible if you can convince them that the paper still makes a substantial contribution without the additional experiments. The following could be included in a partial revision:

  1. Strengthening existing results – instead of adding new data, you could provide additional analysis on the current dataset (e.g., statistical analysis, visualization; see the sketch after this list).
  2. Theoretical or conceptual justifications – if adding SOTA comparisons is challenging, you could:
    • clearly state why your method differs from deep learning approaches (e.g., interpretability, lower computational cost, domain-specific advantages).
    • cite existing comparative studies that highlight deep learning's limitations in your context.
  3. Acknowledging limitations transparently – journals appreciate honesty, so you can:
    • add a discussion about potential future work, stating that while deep learning comparisons would be ideal, they are beyond the scope due to computational or data constraints.
    • mention the results of some previous experiments where your method failed in particular settings.
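As an illustration of the first point, here is a minimal sketch of one way to strengthen existing results without new data: a paired bootstrap confidence interval on the accuracy gap between your method and a baseline, computed on the test set you already have. The arrays below are synthetic placeholders standing in for per-sample correctness scores; in practice you would plug in your own predictions.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical per-sample correctness (1 = correct, 0 = wrong) for your
# method and a baseline on the SAME test set; replace with real outputs.
proposed = rng.binomial(1, 0.87, size=500)
baseline = rng.binomial(1, 0.84, size=500)

# Paired bootstrap: resample test indices and recompute the accuracy gap.
n_boot = 10_000
idx = rng.integers(0, len(proposed), size=(n_boot, len(proposed)))
gaps = proposed[idx].mean(axis=1) - baseline[idx].mean(axis=1)

low, high = np.percentile(gaps, [2.5, 97.5])
print(f"Accuracy gap: {proposed.mean() - baseline.mean():+.3f}")
print(f"95% bootstrap CI: [{low:+.3f}, {high:+.3f}]")
```

Reporting an interval rather than a single point estimate is often enough to address "your results may not be robust"-style concerns without collecting anything new.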

Other possible challenging comments

Are these the hardest reviewer comments? The two comments above are certainly among the hardest, but other tough ones include:

  • "The proposed method is fundamentally flawed." If reviewers claim that your approach is incorrect or theoretically unsound, the entire study might need rethinking.
  • "The novelty is insufficient." If they argue your method is too similar to prior work, proving novelty is difficult.
  • "The data do not support your conclusions." If your claims do not match the evidence, reworking the analysis may be impossible without new experiments.

Published on April 5, 2025